A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems
We propose a set of compositional design patterns to describe a large variety
of systems that combine statistical techniques from machine learning with
symbolic techniques from knowledge representation. As in other areas of
computer science (knowledge engineering, software engineering, ontology
engineering, process mining and others), such design patterns help to
systematize the literature, clarify which combinations of techniques serve
which purposes, and encourage re-use of software components. We have validated
our set of compositional design patterns against a large body of recent
literature.
Comment: 12 pages, 55 references
Semi-Supervised Learning using Differentiable Reasoning
We introduce Differentiable Reasoning (DR), a novel semi-supervised learning
technique which uses relational background knowledge to benefit from unlabeled
data. We apply it to the Semantic Image Interpretation (SII) task and show that
background knowledge provides significant improvement. We find that there is a
strong but interesting imbalance between the contributions of updates from
Modus Ponens (MP) and its logical equivalent Modus Tollens (MT) to the learning
process, suggesting that our approach is very sensitive to a phenomenon called
the Raven Paradox. We propose a solution to overcome this imbalance.
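The MP/MT imbalance can be made concrete with a small sketch (an illustrative reading, not the paper's implementation): using the Reichenbach implication I(a, c) = 1 - a + a*c, a standard fuzzy implication, the gradient on the consequent scales with the truth of the antecedent (a Modus-Ponens-like update), while the gradient on the antecedent scales with the falsity of the consequent (a Modus-Tollens-like update).

```python
# A minimal sketch (assumed illustration, not the paper's code) of how a
# differentiable implication splits gradient between antecedent and consequent.
# Maximizing I(a, c) = 1 - a + a*c during training yields two update signals.

def reichenbach(a, c):
    """Fuzzy truth value of a -> c for truth degrees a, c in [0, 1]."""
    return 1.0 - a + a * c

def gradients(a, c):
    """Analytic partial derivatives of I w.r.t. antecedent and consequent."""
    dI_da = c - 1.0   # Modus-Tollens-like: pushes a down when c is false
    dI_dc = a         # Modus-Ponens-like: pushes c up when a is true
    return dI_da, dI_dc

# With a mostly-false antecedent (e.g. "is a raven" on arbitrary images),
# the MT signal dominates: the model learns mainly by lowering the
# antecedent, echoing the Raven Paradox behaviour described above.
da, dc = gradients(a=0.1, c=0.2)
print(round(da, 3), round(dc, 3))  # prints: -0.8 0.1
```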
On the efficiency of meta-level inference
In this thesis we will be concerned with a particular type of architecture for reasoning
systems, known as meta-level architectures. After presenting the arguments for such
architectures (chapter 1), we discuss a number of systems in the literature that provide an
explicit meta-level architecture (chapter 2), and these systems are compared on the basis
of a number of distinguishing characteristics. This leads to a classification of meta-level
architectures (chapter 3). Within this classification we compare the different types of
architectures, and argue that one of these types, called bilingual meta-level inference
systems, has a number of advantages over the other types. We study the general structure
of bilingual meta-level inference architectures (chapter 4), and we discuss the details of a
system that we implemented which has this architecture (chapter 5). One of the problems
that this type of system suffers from is the overhead that is incurred by the meta-level
effort. We give a theoretical model of this problem, and we perform measurements which
show that this problem is indeed a significant one (chapter 6). Chapter 7 discusses partial
evaluation, the main technique available in the literature to reduce the meta-level
overhead. This technique, although useful, suffers from a number of serious problems. We
propose two further techniques, partial reflection and many-sorted logic (chapters 8 and
9), which can be used to reduce the problem of meta-level overhead without suffering from
these problems.
Analyzing Differentiable Fuzzy Implications
Combining symbolic and neural approaches has gained considerable attention in
the AI community, as it is often argued that the strengths and weaknesses of
these approaches are complementary. One such trend in the literature is the use
of weakly supervised learning techniques that employ operators from fuzzy logics. In
particular, they use prior background knowledge described in such logics to
help the training of a neural network from unlabeled and noisy data. By
interpreting logical symbols using neural networks (or grounding them), this
background knowledge can be added to regular loss functions, hence making
reasoning a part of learning.
In this paper, we investigate how implications from the fuzzy logic
literature behave in a differentiable setting. In such a setting, we analyze
the differences between the formal properties of these fuzzy implications. It
turns out that various fuzzy implications, including some of the most
well-known, are highly unsuitable for use in a differentiable learning setting.
A further finding shows a strong imbalance between gradients driven by the
antecedent and the consequent of the implication. Furthermore, we introduce a
new family of fuzzy implications (called sigmoidal implications) to tackle this
phenomenon. Finally, we empirically show that it is possible to use
Differentiable Fuzzy Logics for semi-supervised learning, and show that
sigmoidal implications outperform other choices of fuzzy implications.
Comment: 10 pages, 10 figures, accepted to the 17th International Conference on
Principles of Knowledge Representation and Reasoning (KR 2020). arXiv admin
note: substantial text overlap with arXiv:2002.0610
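The unsuitability of some classic implications shows up directly in their gradients. The sketch below (an assumed illustration using the standard Kleene-Dienes and Goedel definitions, not the paper's code) probes both with finite differences:

```python
# A toy comparison of gradient behaviour for two classic fuzzy implications.
# Kleene-Dienes routes all gradient through a single argument of the max;
# Goedel yields zero gradient whenever the implication is already satisfied.

def kleene_dienes(a, c):
    """I(a, c) = max(1 - a, c)."""
    return max(1.0 - a, c)

def goedel(a, c):
    """I(a, c) = 1 if a <= c, else c."""
    return 1.0 if a <= c else c

def num_grad(f, a, c, eps=1e-6):
    """Central finite-difference gradients w.r.t. antecedent and consequent."""
    da = (f(a + eps, c) - f(a - eps, c)) / (2 * eps)
    dc = (f(a, c + eps) - f(a, c - eps)) / (2 * eps)
    return da, dc

# Kleene-Dienes at (a=0.9, c=0.3): since 1 - a = 0.1 < c = 0.3, only the
# consequent receives gradient; the antecedent gets none.
print(num_grad(kleene_dienes, 0.9, 0.3))

# Goedel at (a=0.3, c=0.9): already satisfied (a <= c), so both gradients
# vanish and no learning signal remains.
print(num_grad(goedel, 0.3, 0.9))
```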
Rough Set Semantics for Identity on the Web
Identity relations are at the foundation of many logic-based knowledge representations. We argue that the traditional notion of equality is unsuited for many realistic knowledge representation settings. The classical interpretation of equality is too strong when equality statements are re-used outside their original context. On the Semantic Web, equality statements are used to interlink multiple descriptions of the same object, using owl:sameAs assertions. And indeed, many practical uses of owl:sameAs are known to violate the formal Leibniz-style semantics. We provide a more flexible semantics for identity by assigning meaning to the subrelations of an identity relation in terms of the predicates that are used in a knowledge base. Using these indiscernibility predicates, we define upper and lower approximations of equality in the style of rough-set theory, resulting in a quality measure for identity relations.
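One way to read the proposed approximations is sketched below (a simplified illustration with made-up entities, predicates, and values; not the paper's formalization): pairs of entities that agree on every predicate form a strict lower approximation of identity, while pairs that agree on at least one predicate form a loose upper approximation.

```python
# Approximate an owl:sameAs-style identity relation by indiscernibility.
# Two entities are indiscernible w.r.t. a predicate set P if they share the
# same value for every predicate in P. (Toy data, hypothetical identifiers.)

entities = {
    "dbpedia:Amsterdam": {"country": "NL", "population": 872_680},
    "geo:Amsterdam":     {"country": "NL", "population": 872_680},
    "dbpedia:Rotterdam": {"country": "NL", "population": 651_446},
}

def indiscernible(e1, e2, predicates):
    return all(entities[e1].get(p) == entities[e2].get(p) for p in predicates)

all_preds = ["country", "population"]

# Lower approximation: pairs that agree on ALL predicates -- identity that
# survives the strictest indiscernibility test.
lower = {(a, b) for a in entities for b in entities
         if a < b and indiscernible(a, b, all_preds)}

# Upper approximation: pairs that agree on at least ONE predicate -- the
# loosest context in which the entities could still count as "the same".
upper = {(a, b) for a in entities for b in entities
         if a < b and any(indiscernible(a, b, [p]) for p in all_preds)}

print(sorted(lower))
print(sorted(upper))
```

The gap between the two sets gives a natural quality signal: an owl:sameAs link in the upper but not the lower approximation is context-dependent rather than Leibniz-style equality.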
Towards Semantically Enriched Embeddings for Knowledge Graph Completion
Embedding based Knowledge Graph (KG) Completion has gained much attention
over the past few years. Most of the current algorithms consider a KG as a
multidirectional labeled graph and lack the ability to capture the semantics
underlying the schematic information. In a separate development, a vast amount
of information has been captured within the Large Language Models (LLMs) which
has revolutionized the field of Artificial Intelligence. KGs could benefit from
these LLMs and vice versa. This vision paper discusses the existing algorithms
for KG completion based on variations in how KG embeddings are generated. It
starts by discussing various KG completion algorithms, such as transductive
and inductive link prediction and entity type prediction algorithms. It then
moves on to algorithms that utilize type information within the KGs, algorithms
that leverage LLMs, and finally algorithms that capture the semantics
represented in different description logic axioms. We conclude the paper with a
critical reflection on the current state of work in the community and give
recommendations for future directions.
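As background for the embedding-based completion algorithms the paper surveys, the translational scoring idea popularized by TransE can be sketched as follows (illustrative, untrained toy vectors; entity and relation names are made up):

```python
# TransE-style link prediction: a triple (h, r, t) is plausible when the
# embedding of h, translated by the embedding of r, lands close to the
# embedding of t, i.e. h + r is approximately t.

import math

emb = {  # toy 3-dimensional embeddings, chosen by hand for illustration
    "Amsterdam":   [0.9, 0.1, 0.0],
    "Netherlands": [1.0, 0.1, 0.5],
    "Berlin":      [0.1, 0.9, 0.0],
    "capital_of":  [0.1, 0.0, 0.5],
}

def score(h, r, t):
    """Negative L2 distance ||h + r - t||: higher means more plausible."""
    return -math.sqrt(sum((hv + rv - tv) ** 2
                          for hv, rv, tv in zip(emb[h], emb[r], emb[t])))

# Tail prediction: rank candidate tails for (Amsterdam, capital_of, ?).
candidates = ["Netherlands", "Berlin"]
ranked = sorted(candidates,
                key=lambda t: score("Amsterdam", "capital_of", t),
                reverse=True)
print(ranked[0])  # Netherlands ranks above Berlin
```

Purely geometric scores like this are what schema-aware and LLM-enriched approaches aim to improve on, since the translation carries no notion of entity types or description logic axioms.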